UX/UI Project · Enterprise Tool


Camera Testing Tool 2.0

Redesigning how Microsoft's device teams validate camera quality —
from fragmented spreadsheets to one decision-ready platform.

Role
UX/UI Designer
Timeline
Sep 2025 – Present
Scope
Research · System · Design
Overview

Camera quality determines
which devices reach 10 million people.

Microsoft's device program validates camera performance for every Windows-certified device before it ships — across 15+ vendors and 80+ quality metrics per device. Camera Testing Tool 2.0 is the platform I redesigned to make that validation process faster, clearer, and more consistent.

10M+
Devices worldwide
15+
Vendors / OEMs
80+
Quality metrics per device
Camera Testing Tool 2.0 — interface preview
Platform
Windows desktop enterprise tool (WinUI)
Company
Microsoft — Device Quality Program
My Role
End-to-end UX — research, IA, wireframes, UI design, design system
Timeline
Sep 2025 – Present
Users

Program Manager

Oversees vendor relationships and makes final validation calls.

Core frustration

Producing a vendor report required pulling results from separate screens and pasting into an external document — hours wasted, inconsistency introduced every cycle.

Camera Validation Tester

Runs quality tests and interprets metric results per device.

Core frustration

Interpreting results required constantly switching to an external reference sheet to look up thresholds — slowing analysis and making it easy to miss boundary failures.

The Problem

A pipeline this critical
was running on manual steps.

Every cycle meant jumping between tools that were never meant to work together — uploading device data in one place, looking up thresholds in a separate document, cross-checking per-device reports, then hand-compiling everything into a vendor deliverable.

  • Inconsistent pass/fail interpretations across testers
  • Failures missed at the threshold boundary
  • Vendor reports took hours to produce manually
  • The old tool surfaced data — it didn't help anyone make a decision
Legacy tool — fragmented workflow
Insights

The tool had data.
It didn't have decisions.

Three methods — stakeholder interview, live workflow observation, and a legacy tool audit — all pointed to the same root cause: the tool made information available, but never helped testers act on it.

N=1 interview participant — every qualitative finding was cross-validated against workflow observation or the tool audit before informing a design decision.

Finding 01

Testers lived between tools, not inside one

Reviewing results across multiple devices required switching between separate per-device reports — no unified view existed.

Finding 02

Numbers without context slow decisions

Pass/fail thresholds weren't visible in the results view. Testers had to leave the tool to interpret every metric value.

Finding 03

Every answer required a workaround

Teams compensated for tool gaps with manual steps — comparing metrics in spreadsheets, summarizing results by hand for stakeholders.

Finding 04

No screen showed the full picture

There was no consolidated view of device health across a vendor set — understanding overall quality required assembling information manually.

Design Goals

Four findings.
Three clear goals.

Every finding pointed to the same gap: the tool surfaced data but didn't structure it around decisions. These three goals shaped every decision that followed — from information architecture down to individual components.

01

Surface failures before detail

Critical failures and device health visible before any drill-down. Decision-critical signals always above the fold — never buried in rows.

02

Make comparison effortless

Evaluate multiple devices in a single view — no tab-switching, no manual assembly. The comparison is the default state, not an extra step.

03

Embed context, not just data

Threshold guidance lives inside the results view. Testers interpret and decide without leaving the screen — no external reference needed.

Solution

Four decisions.
Each traced to a finding.

The IA was rebuilt around four sequential screens — each owning exactly one moment in the workflow. Every design choice below maps directly to a validated research finding.

Step 01
File Setup
Batch import — all devices configured together, not one by one.
Step 02
Run & Analyze
Validate and review metrics — thresholds embedded directly in the view.
Step 03
Overview
Full-set health summary — pass rate, failures, vendor comparison at a glance.
Step 04
Export Report
Auto-generated vendor deliverable — no manual copy-paste.
Decision 01
Multi-Device Batch Import
Finding
Testers configured validation runs device-by-device — repeating the same setup steps for every hardware variant in a cycle, introducing inconsistency and wasting time.
Decision
Replaced single-device upload with a unified batch import flow. All devices are configured together in one session, with shared settings applied across the set.
Outcome
Consistent run configurations across devices. Reduced setup friction and eliminated a class of errors caused by per-device manual entry.
UI — Multi-Device Batch Import
Decision 02
Threshold Context Embedded in Results
Finding
Testers couldn't interpret metric values without knowing the threshold — so they constantly switched to an external reference sheet, breaking focus and slowing analysis.
Decision
Redesigned the results view to embed threshold context, pass/fail indicators, and boundary proximity directly alongside every metric value.
Outcome
Testers can read and interpret results without leaving the screen. Faster analysis and fewer missed failures at the boundary.
UI — Results with threshold context (view 1)
UI — Results with threshold context (view 2)
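As a rough illustration of the embedded threshold logic, the sketch below classifies a metric as pass, at-risk, or fail and flags values that sit close to the boundary. The type names, the 5% proximity band, and the example metric are assumptions for illustration, not the tool's actual implementation.

```typescript
// Hypothetical sketch of per-metric status classification with boundary proximity.
// MetricResult, classifyMetric, and the 5% band are illustrative assumptions.

type MetricStatus = "pass" | "at-risk" | "fail";

interface MetricResult {
  name: string;            // e.g. "SNR (low light)"
  value: number;           // measured value for this device
  threshold: number;       // acceptance threshold for the metric
  higherIsBetter: boolean; // direction of the pass condition
}

function classifyMetric(m: MetricResult, proximityBand = 0.05): MetricStatus {
  const passes = m.higherIsBetter ? m.value >= m.threshold : m.value <= m.threshold;
  if (!passes) return "fail";

  // Results that pass but sit within 5% of the threshold surface as "at-risk",
  // so boundary cases stay visible without leaving the results view.
  const margin = Math.abs(m.value - m.threshold) / Math.abs(m.threshold || 1);
  return margin <= proximityBand ? "at-risk" : "pass";
}

// Example: a metric that passes but sits just above its threshold.
const snr: MetricResult = { name: "SNR (low light)", value: 30.9, threshold: 30, higherIsBetter: true };
console.log(classifyMetric(snr)); // "at-risk"
```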
Decision 03
Cross-Device Comparison in One View
Finding
Comparing metric performance across devices required opening separate per-device reports — a fragmented experience that made spotting patterns across a vendor set nearly impossible.
Decision
Introduced a unified comparison layer with tabbed metric categories and device toggles, enabling side-by-side analysis without switching screens.
Outcome
Reduced time-to-insight on cross-device analysis. Decision-critical signals visible at a glance, not buried in sequential reports.
UI — Cross-device comparison view
Decision 04
Auto-Generated Vendor Report
Finding
After analysis, PMs manually assembled validation results into a vendor deliverable — copying data out of the tool into a separate document, a process that took hours and varied in format.
Decision
Designed a final reporting layer that transforms validated results directly into a structured, vendor-ready summary. One action — no manual compilation.
Outcome
Report generation goes from hours to minutes. Consistent format across all vendor deliverables, regardless of who creates them.
UI — Report summary and export
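A minimal sketch of the reporting idea, assuming hypothetical result and report shapes: per-device results roll up into one vendor-level summary that an export step can render. None of these names come from the production tool.

```typescript
// Illustrative only: aggregates per-device results into a vendor summary
// structure that an export step could render. Shapes and names are assumptions.

interface DeviceResult {
  deviceId: string;
  vendor: string;
  metrics: { name: string; status: "pass" | "at-risk" | "fail" }[];
}

interface VendorReport {
  vendor: string;
  generatedAt: string;
  deviceCount: number;
  passRate: number; // share of devices with zero failing metrics
  criticalFailures: { deviceId: string; metric: string }[];
}

function buildVendorReport(vendor: string, results: DeviceResult[]): VendorReport {
  const vendorResults = results.filter(r => r.vendor === vendor);
  const failing = vendorResults.flatMap(r =>
    r.metrics.filter(m => m.status === "fail").map(m => ({ deviceId: r.deviceId, metric: m.name }))
  );
  const passingDevices = vendorResults.filter(r => r.metrics.every(m => m.status !== "fail"));

  return {
    vendor,
    generatedAt: new Date().toISOString(),
    deviceCount: vendorResults.length,
    passRate: vendorResults.length ? passingDevices.length / vendorResults.length : 0,
    criticalFailures: failing,
  };
}
```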
User Flows

Five flows. Every decision mapped.

Each flow traces a distinct user goal through the platform — with branching decision logic made explicit at every critical step.

Flow 01 · Primary
Core Test Execution
Flow 01 — Core Test Execution
Flow 02
Multi-Device Testing
Flow 02 — Multi-Device Testing
Flow 03
Metrics Analysis
Flow 03 — Metrics Analysis
Flow 04
Check Test Results
Flow 04 — Check Test Results
Flow 05
Quick Decision
Flow 05 — Quick Decision
Design System

Built for data density,
not decoration.

A validation interface displays dozens of metrics per screen. Every token and component was chosen to reduce visual noise — not for aesthetics. Color encodes status (pass / fail / at-risk) so testers read results at a glance. The typography scale ensures dense data tables stay scannable without feeling cluttered.

Design system — color tokens
Design system — typography scale
Design system — components
Design system — status patterns
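To make the "color encodes status" principle concrete, here is a hedged sketch of how status tokens could be centralized so every component reads the same styling. The token names and hex values are placeholders, not the actual palette.

```typescript
// Placeholder status token map — names and hex values are assumptions shown
// only to convey the "color encodes status" principle, not the real palette.

const statusTokens = {
  pass:   { foreground: "#0E700E", background: "#E6F4E6", icon: "check" },
  atRisk: { foreground: "#8A5A00", background: "#FFF4CE", icon: "warning" },
  fail:   { foreground: "#A4262C", background: "#FDE7E9", icon: "error" },
} as const;

type StatusKey = keyof typeof statusTokens; // "pass" | "atRisk" | "fail"

// Components pull status styling from one place, so dense tables stay consistent
// and a later contrast fix only touches the token layer.
function tokensFor(status: StatusKey) {
  return statusTokens[status];
}
```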
Impact

Fewer screens.
Faster decisions. Less guesswork.

Metrics estimated from workflow observation and PM review. Formal usability testing planned for Q2 2027.

Outcomes overview — Camera Testing Tool 2.0
70%
Estimated reduction in cross-device comparison time per validation cycle.
Workflow observation
1
Screen to reach metric-level results — down from 3+ screens in the legacy tool.
IA restructure
80%
Of the fragmented workflow steps consolidated into a single structured platform.
PM-confirmed
Mark Lin
Program Manager · Microsoft
"

Kochakorn demonstrated strong system thinking and quickly understood complex technical constraints, translating them into a clear, structured interface that improved metric visibility and usability. Her ability to simplify dense data views while maintaining accuracy was notable — as was her approach combining product mindset with execution discipline throughout the project.

Reflection

What this project
sharpened.

Simplicity is a functional requirement

In enterprise validation tools, every extra click scales into real errors. Reducing cognitive load isn't a preference — it's the most critical thing the interface can do.

Constraints make research stronger

N=1 access forced every insight to be cross-validated before informing a decision. The constraint raised the bar — findings had to earn their way in, not just appear.

What's next.

Moderated usability tests

Run sessions with 3–5 engineers to validate the cross-device comparison flow and metric interpretation against real tasks.

Accessibility audit

Review the color-coded status system against WCAG contrast standards — critical where color encodes pass/fail decisions across 80+ metrics.
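For reference, WCAG 2.x defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05) using the relative luminance of the lighter and darker colors. Below is a small sketch of the check the audit would apply; the example colors are placeholders.

```typescript
// Contrast check following the WCAG 2.x formula: relative luminance from sRGB,
// ratio = (L_lighter + 0.05) / (L_darker + 0.05). Example colors are placeholders.

function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map(i => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// WCAG AA requires 4.5:1 for normal text and 3:1 for large text or graphical indicators.
console.log(contrastRatio("#0E700E", "#E6F4E6").toFixed(2));
```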